
feat: add clickhouse-bench with auto-downloaded ClickHouse binary#6736

Open
fastio wants to merge 13 commits into vortex-data:develop from fastio:integration-clickhouse-benchmark-baseline

Conversation

@fastio

@fastio fastio commented Mar 2, 2026

Introduce a new clickhouse-bench benchmark crate that runs ClickBench queries against Parquet data via clickhouse-local, providing a baseline for comparing Vortex performance against ClickHouse.

Key design decisions:

  • build.rs auto-downloads the full ClickHouse binary (with Parquet support) into target/clickhouse-local/, similar to how vortex-duckdb downloads the DuckDB library. This eliminates manual install steps and avoids issues with slim/homebrew builds lacking Parquet support.
  • The binary path is baked in via CLICKHOUSE_BINARY env at compile time; CLICKHOUSE_LOCAL env var allows runtime override.
  • ClickHouse-dialect SQL queries are maintained in a separate clickbench_clickhouse_queries.sql file (43 queries).
  • CI workflows updated to include clickhouse:parquet target in ClickBench benchmarks and conditionally build clickhouse-bench.

#6425

@myrrc myrrc self-requested a review March 2, 2026 10:32
Contributor

Why do we need this file? Is it different from the one already included?

Author

@fastio fastio Mar 3, 2026

Good catch! I have removed the duplicate clickbench_clickhouse_queries.sql and validated with cargo check -p vortex-bench.

Contributor

@myrrc myrrc left a comment

I don't think downloading untrusted binaries from the internet via a build script is a good idea. We want first-class integration with DuckDB, so we need to download its sources (although I wouldn't do that in a build script either), but we don't need such integration with ClickHouse yet.

My idea is to use the clickhouse binary in CI (as it runs on Linux only) and require users to download it by hand if they want a local run. Benchmarking on macOS doesn't make much sense anyway, as the vectorized instruction set is different.

@fastio
Author

fastio commented Mar 3, 2026

I don't think downloading untrusted binaries from the internet via a build script is a good idea. We want first-class integration with DuckDB, so we need to download its sources (although I wouldn't do that in a build script either), but we don't need such integration with ClickHouse yet.

My idea is to use the clickhouse binary in CI (as it runs on Linux only) and require users to download it by hand if they want a local run. Benchmarking on macOS doesn't make much sense anyway, as the vectorized instruction set is different.

Agreed — removed the binary download from build.rs entirely. The clickhouse binary is now resolved at runtime: via CLICKHOUSE_BINARY env var or from $PATH. CI installs it via the official installer before building. Local users need to install it manually. No more untrusted binary downloads in the build script.
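The runtime resolution described above can be sketched as follows. This is a minimal illustration, not the PR's actual code; `resolve_in` and `resolve_clickhouse_binary` are hypothetical helper names, and the lookup order (explicit env override first, then `$PATH`) is the behavior the comment describes.

```rust
use std::env;
use std::path::PathBuf;

/// Pure lookup helper: an explicit override wins unconditionally, otherwise
/// each `$PATH` entry is searched for an executable named `clickhouse`.
fn resolve_in(override_var: Option<String>, path_var: &str) -> Option<PathBuf> {
    if let Some(explicit) = override_var {
        return Some(PathBuf::from(explicit));
    }
    path_var
        .split(':')
        .filter(|dir| !dir.is_empty())
        .map(|dir| PathBuf::from(dir).join("clickhouse"))
        .find(|candidate| candidate.exists())
}

/// Resolve the binary as the comment describes: CLICKHOUSE_BINARY env var
/// first, then `$PATH`; `None` means a clear "not found" error downstream.
fn resolve_clickhouse_binary() -> Option<PathBuf> {
    resolve_in(
        env::var("CLICKHOUSE_BINARY").ok(),
        &env::var("PATH").unwrap_or_default(),
    )
}

fn main() {
    match resolve_clickhouse_binary() {
        Some(bin) => println!("using clickhouse at {}", bin.display()),
        None => eprintln!("clickhouse not found; set CLICKHOUSE_BINARY or install it"),
    }
}
```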

- name: Install ClickHouse
  if: contains(matrix.targets, 'clickhouse:')
  run: |
    curl https://clickhouse.com/ | sh
Contributor

Why not download the latest release file for our architecture from GitHub Releases? Then we don't need any installation, or curl in general.

Author

Good call — updated CI to download the static binary directly from GitHub Releases, pinned to the ClickHouse LTS release v25.8.18.1. No more curl | sh or installation step needed.

@fastio fastio requested review from joseph-isaacs and myrrc March 9, 2026 02:22
return query.to_string();
}

strip_simple_identifier_quotes(query)
Contributor

ClickHouse does handle quoted identifiers correctly, so I think we can pass them through to reduce this PR's diff.

@myrrc
Contributor

myrrc commented Mar 9, 2026

The changes look good to me conceptually, let's see what the CI run says.

@myrrc myrrc self-assigned this Mar 9, 2026
@fastio fastio force-pushed the integration-clickhouse-benchmark-baseline branch from 82c11d1 to 60aa2d2 on March 10, 2026 13:13
fastio added 6 commits March 10, 2026 21:27
Introduce a new clickhouse-bench benchmark crate that runs ClickBench
queries against Parquet data via clickhouse-local, providing a baseline
for comparing Vortex performance against ClickHouse.

Key design decisions:
- build.rs auto-downloads the full ClickHouse binary (with Parquet
  support) into target/clickhouse-local/, similar to how vortex-duckdb
  downloads the DuckDB library. This eliminates manual install steps
  and avoids issues with slim/homebrew builds lacking Parquet support.
- The binary path is baked in via CLICKHOUSE_BINARY env at compile time;
  CLICKHOUSE_LOCAL env var allows runtime override.
- ClickHouse-dialect SQL queries are maintained in a separate
  clickbench_clickhouse_queries.sql file (43 queries).
- CI workflows updated to include clickhouse:parquet target in
  ClickBench benchmarks and conditionally build clickhouse-bench.

Signed-off-by: fastio <pengjian.uestc@gmail.com>
…dling

Signed-off-by: fastio <pengjian.uestc@gmail.com>
…use from PATH

- Remove reqwest-based binary download from build.rs
- Resolve clickhouse binary via CLICKHOUSE_BINARY env var or $PATH at runtime
- Add CI step to install clickhouse before building when needed
- Fail with clear error message if binary is not found locally

Signed-off-by: fastio <pengjian.uestc@gmail.com>
- Pass subcommand arg to clickhouse-bench in run-sql-bench.sh for consistency
- Use BenchmarkArg + create_benchmark() in main.rs like other engines
- Replace `which` with `clickhouse local --version` for binary verification
- Pin ClickHouse to LTS release v25.8.18.1 from GitHub Releases

Signed-off-by: fastio <pengjian.uestc@gmail.com>
…identifier handling.

Queries are now returned as-is without dialect-specific transformation.

Signed-off-by: fastio <pengjian.uestc@gmail.com>
Signed-off-by: fastio <pengjian.uestc@gmail.com>
@fastio fastio force-pushed the integration-clickhouse-benchmark-baseline branch from 60aa2d2 to 5aa201a on March 10, 2026 13:28
Signed-off-by: Peng Jian <pengjian.uestc@gmail.com>
@codspeed-hq

codspeed-hq bot commented Mar 10, 2026

Merging this PR will degrade performance by 20.9%

❌ 3 regressed benchmarks
✅ 1026 untouched benchmarks
⏩ 1466 skipped benchmarks [1]

⚠️ Please fix the performance issues or acknowledge them on CodSpeed.

Performance Changes

Mode        Benchmark                           BASE    HEAD    Efficiency
Simulation  take_200k_dispersed                 3.6 ms  4.5 ms  -19.62%
Simulation  patched_take_200k_first_chunk_only  4.8 ms  5.4 ms  -10.69%
Simulation  take_200k_first_chunk_only          3.3 ms  4.2 ms  -20.9%

Comparing fastio:integration-clickhouse-benchmark-baseline (14df836) with develop (ba77972)


Footnotes

  1. 1466 benchmarks were skipped, so the baseline results were used instead.

@connortsui20
Contributor

connortsui20 commented Mar 10, 2026

@fastio Feel free to ping us in the public slack channel if you want us to run CI for you! (Feel free to ping me here as well)

Edit: To fix the CI issues right now, could you update the lockfile with cargo check?

Signed-off-by: fastio <pengjian.uestc@gmail.com>
@fastio
Author

fastio commented Mar 11, 2026

@fastio Feel free to ping us in the public slack channel if you want us to run CI for you! (Feel free to ping me here as well)

Edit: To fix the CI issues right now, could you update the lockfile with cargo check?

@connortsui20 Thanks! I've updated the lockfile by running cargo check. The Cargo.lock should now be in sync. Let me know if CI still has issues after this.

@myrrc myrrc self-requested a review March 11, 2026 13:17
@connortsui20
Contributor

@fastio you can click on the github action that is failing (in this case it is the "CI / Rust (list) (pull_request)" action) and see the problem. Let me know if you need help with this!

Contributor

@0ax1 0ax1 left a comment

It's unclear yet whether we actually want to maintain this integration. Will loop back with the team.

@connortsui20 connortsui20 requested a review from 0ax1 March 11, 2026 13:29
Contributor

@connortsui20 connortsui20 left a comment

Before addressing some of these comments, can you fix the clippy lint? We should actually run these benchmarks to see if it even works.

Comment on lines +181 to +201
let time_instant = Instant::now();

// The `clickhouse` binary is a multi-tool; invoke it as `clickhouse local`.
let mut child = Command::new(&self.binary)
    .args(["local", "--format", "TabSeparated"])
    .stdin(Stdio::piped())
    .stdout(Stdio::piped())
    .stderr(Stdio::piped())
    .spawn()
    .context("Failed to spawn clickhouse-local")?;

// Write SQL to stdin
{
    let stdin = child
        .stdin
        .as_mut()
        .context("Failed to open clickhouse-local stdin")?;
    stdin
        .write_all(full_sql.as_bytes())
        .context("Failed to write SQL to clickhouse-local stdin")?;
}
Contributor

I don't think this is right. If you look at the other benchmark engine setup, we do not open an entirely new process and write to stdin, which means this engine is going to have wildly different characteristics from the rest of the engines.

I also don't know of a good way to solve this since I have not worked much with Clickhouse before. Do you have any ideas how to make this more consistent with the other engines?

Contributor

No, but we essentially do this: for DuckDB we close and re-open the database, for DataFusion we start a new session, etc.

Contributor

I don't believe this is true (or isn't true anymore?). In our benchmarks/datafusion-bench/src/main.rs and benchmarks/duckdb-bench/src/lib.rs we create the connections once and only time the query.

Author

That's a fair point — the current timing does include process startup overhead. This is a constraint of clickhouse-local's non-interactive mode, which reads all of stdin before executing any queries, so I can't keep a persistent connection the way DuckDB and DataFusion do.

That said, the process startup cost is relatively small compared to query execution time (especially for ClickBench queries). If we want to isolate it, I could run a no-op query first and subtract the baseline, or we could accept the small overhead as part of the "no caching" trade-off. Let me know what you'd prefer.
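The baseline-subtraction idea mentioned above could be implemented roughly as follows. This is a hedged sketch, not the PR's code: `net_query_time` is a hypothetical helper, and the baseline would come from timing a no-op query (e.g. `SELECT 1`) through the same spawn path.

```rust
use std::time::Duration;

/// Subtract a measured process-startup baseline from a raw per-query timing.
/// Clamps at zero so that measurement noise can never produce a negative
/// duration for a fast query.
fn net_query_time(raw: Duration, startup_baseline: Duration) -> Duration {
    raw.checked_sub(startup_baseline).unwrap_or(Duration::ZERO)
}

fn main() {
    // Hypothetical numbers: 120 ms raw measurement, 2 ms startup baseline.
    let raw = Duration::from_millis(120);
    let baseline = Duration::from_millis(2);
    println!("net query time: {:?}", net_query_time(raw, baseline));
}
```

The clamp matters because the baseline itself is noisy; without it, a trivial query could report a negative elapsed time.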

Author

You're right that DuckDB and DataFusion create connections once and time only the query execution. The per-query process spawning is an architectural constraint of clickhouse-local — in piped stdin mode, it reads all of stdin before executing anything, which makes a persistent-process model impossible (I've documented this in the module-level docs).

That said, the overhead is minimal: process spawn is ~1ms, and the CREATE VIEW statements are lightweight references to Parquet files via file(). For ClickBench queries running 100ms+, this adds <1% overhead. Happy to explore alternatives like --queries-file if you think it's worth it, but I believe the current approach is pragmatically fine.

Comment on lines +169 to +174
// Build the full SQL: setup views + the actual query
let mut full_sql = String::new();
for stmt in &self.setup_sql {
    full_sql.push_str(stmt);
    full_sql.push('\n');
}
Contributor

This also seems strange to me, if this is setup work then why are we timing it in our measurement?

Author

The setup SQL (CREATE VIEW statements) is indeed being timed along with the actual query. I'll separate the setup phase from the measurement phase to be consistent with how DuckDB and DataFusion handle this.
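Separating the two phases could look like this minimal sketch. `time_query` is a hypothetical helper name, shown only to illustrate excluding setup work from the measured interval:

```rust
use std::time::{Duration, Instant};

/// Run `setup` untimed (CREATE VIEW statements, etc.), then time only the
/// query closure, mirroring how the DuckDB and DataFusion benchmarks
/// measure query execution alone.
fn time_query<S, Q, T>(setup: S, query: Q) -> (T, Duration)
where
    S: FnOnce(),
    Q: FnOnce() -> T,
{
    setup(); // excluded from the measurement
    let start = Instant::now();
    let out = query();
    (out, start.elapsed())
}

fn main() {
    let (rows, elapsed) = time_query(|| { /* create views here */ }, || 42u64);
    println!("{rows} rows in {elapsed:?}");
}
```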

Comment on lines +132 to +133
# ClickHouse-bench only runs for local benchmarks (clickhouse-local reads local files).
if ! $is_remote && [[ "$has_clickhouse" == "true" ]] && [[ -f "target/release_debug/clickhouse-bench" ]]; then
Contributor

If you look above at the lance setup code, we have a better guard against benching something on remote that is not supposed to be, can you mimic that?

Author

Nice catch — the ^clickhouse: anchor only matched when clickhouse was the first target in the comma-separated string. Dropped the ^ from both the remote guard and the has_clickhouse detection to match the lance pattern.
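The position-independent matching can be expressed in Rust terms for illustration (the actual guard lives in the shell script; `has_clickhouse_target` is a hypothetical name):

```rust
/// True if any comma-separated target starts with "clickhouse:", regardless
/// of its position in the list. This is the behavior the un-anchored grep
/// restores; the old `^clickhouse:` pattern only matched the first target.
fn has_clickhouse_target(targets: &str) -> bool {
    targets.split(',').any(|t| t.trim().starts_with("clickhouse:"))
}

fn main() {
    assert!(has_clickhouse_target("duckdb:parquet,clickhouse:parquet"));
    assert!(!has_clickhouse_target("duckdb:parquet"));
    println!("guard behaves as expected");
}
```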

[dependencies]
anyhow = { workspace = true }
clap = { workspace = true, features = ["derive"] }
tokio = { workspace = true, features = ["full"] }
Contributor

I doubt you need "full" features here

Author

Good catch — removed features = ["full"] from tokio. The benchmark only uses tokio::runtime::Runtime for one async call (generate_base_data), so workspace-level features are sufficient.

Comment on lines +39 to +45
fn queries_file_path(&self) -> PathBuf {
    if let Some(file) = &self.queries_file {
        return file.into();
    }
    let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
    manifest_dir.join("clickbench_queries.sql")
}
Contributor

This refactoring seems somewhat unnecessary

Author

Could you clarify what you're referring to here? This file (vortex-bench/src/clickbench/benchmark.rs) has no changes in the current PR — the queries_file_path() method and queries_file field already exist on develop. This comment may have been targeting an earlier revision that was rebased out. Let me know if there's something specific you'd like addressed!

@fastio
Author

fastio commented Mar 12, 2026

It's unclear yet, whether we actually want to maintain this integration. Will loop back with the team.

Hi @0ax1, this PR is Phase 1 of the plan discussed in #6425 with @gatesn. The idea was to first integrate ClickHouse into the ClickBench benchmarking framework (Parquet baseline only) before submitting the full vortex-clickhouse crate.

As @gatesn put it: "I think we should also look to add Clickbench to the benchmarking framework first, even without support for Vortex. That will give us some sanity checks on the PR that we are doing things sensibly regarding both performance and correctness."

This PR intentionally has a narrow scope — it only adds ClickHouse as a benchmark engine running against Parquet via clickhouse-local. The maintenance burden is minimal: one benchmark crate with no custom library bindings. Happy to discuss further if the team has concerns!

@gatesn
Contributor

gatesn commented Mar 13, 2026

Just to confirm, I'm happy to go ahead and merge this. Sounds like there are some open questions about where to put the benchmark timings.

Also, it's worth making sure the caching behavior is consistent with our other benchmarks. We haven't actually documented precisely what this is yet, but it typically means no caching at all, even for things like footers.

@fastio
Author

fastio commented Mar 16, 2026

Thanks for confirming the merge! On the two points:

Benchmark timings: Could you clarify what you have in mind? Are you referring to where the timing results should be reported (e.g., stdout, a results file, CI artifacts), or how the timing logic is organized in code? Happy to follow whatever convention you prefer.

Caching behavior: Since clickhouse-local spawns a fresh process per query, there's no cross-query caching at all — no metadata cache, no footer cache, no warm-up effects. This is actually stricter than DuckDB's current setup (which has parquet_metadata_cache = true). If we want to formalize the "no caching" policy across all engines, I can open a follow-up issue to track that.

@fastio
Author

fastio commented Mar 16, 2026

@gatesn Thanks for confirming! On caching: the per-process model is actually the most cache-free approach of all our engines. Each query spawns a fresh clickhouse-local process with zero state — no metadata cache, no footer cache, no in-memory buffers. The CREATE VIEW statements just create references via file() without pre-reading.

The only shared layer is the OS page cache, which affects all engines equally. So the caching behavior is at least as strict as our other benchmarks (arguably stricter than DuckDB, which enables parquet_metadata_cache).

On timing placement — currently results go to ch-results.json and get appended to results.json alongside other engines. Let me know if you'd prefer a different approach.

Remove ^ anchor from clickhouse grep patterns to match targets
regardless of position in the comma-separated string, consistent
with the lance guard pattern.

Signed-off-by: fastio <pengjian.uestc@gmail.com>
@fastio fastio force-pushed the integration-clickhouse-benchmark-baseline branch from 525d1d0 to 42a591c on March 16, 2026 13:57

6 participants